41 research outputs found

    Sparse coding

    The (frequently updated) original version is available at http://www.scholarpedia.org/article/Sparse_coding. Mammalian brains consist of billions of neurons, each capable of independent electrical activity. Information in the brain is represented by the pattern of activation of this large neural population, forming a neural code. The neural code defines what pattern of neural activity corresponds to each represented information item. In the sensory system, such items may indicate the presence of a stimulus object or the value of some stimulus parameter, assuming that each time this item is represented the neural activity pattern will be the same or at least similar. One important and relatively simple property of this code is the fraction of neurons that are strongly active at any one time. For a set of N binary neurons (which can either be 'active' or 'inactive'), the average (i.e., expected value) of this fraction across all information items is the sparseness of the code. This average fraction can vary from close to 0 to about 1/2. Average fractions above 1/2 can always be decreased below 1/2 without loss of information by replacing each active neuron with an inactive one, and vice versa. Sparse coding is the representation of items by the strong activation of a relatively small set of neurons. For each stimulus, this is a different subset of all available neurons.
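    A minimal sketch of the sparseness measure described above, assuming a matrix of binary activity patterns (one row per represented item, one column per neuron); the variable names and the synthetic data are illustrative, not taken from the text.

```python
import numpy as np

# Hypothetical binary population code: one row per represented item,
# one column per neuron (1 = active, 0 = inactive).
rng = np.random.default_rng(0)
n_items, n_neurons = 1000, 200
codes = (rng.random((n_items, n_neurons)) < 0.05).astype(int)  # ~5% active

# Sparseness as defined above: the fraction of active neurons per item,
# averaged over all items.
active_fraction_per_item = codes.mean(axis=1)
sparseness = active_fraction_per_item.mean()
print(f"average active fraction: {sparseness:.3f}")

# If the average fraction exceeded 1/2, complementing every neuron's state
# (active <-> inactive) would bring it below 1/2 without losing information.
if sparseness > 0.5:
    codes = 1 - codes
```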

    Bayesian binning beats approximate alternatives: estimating peri-stimulus time histograms

    The peristimulus time histogram (PSTH) and its more continuous cousin, the spike density function (SDF), are staples in the analytic toolkit of neurophysiologists. The former is usually obtained by binning spike trains, whereas the standard method for the latter is smoothing with a Gaussian kernel. Selection of a bin width or a kernel size is often done in a relatively arbitrary fashion, even though there have been recent attempts to remedy this situation. We develop an exact Bayesian, generative model approach to estimating PSTHs and demonstrate its superiority to competing methods. Further advantages of our scheme include automatic complexity control and error bars on its predictions.
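    For context, a sketch of the two standard estimators the abstract contrasts itself with (a binned PSTH and a Gaussian-smoothed SDF), not the paper's exact Bayesian method; the spike data, bin width, and kernel size are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Hypothetical spike data: one array of spike times (in seconds) per trial.
rng = np.random.default_rng(1)
trials = [np.sort(rng.uniform(0.0, 1.0, rng.poisson(20))) for _ in range(50)]

# The bin width and kernel size below are exactly the kind of ad hoc
# choices the abstract refers to.
t_max, bin_width = 1.0, 0.02
edges = np.arange(0.0, t_max + bin_width, bin_width)

# PSTH: count spikes per bin across trials, convert to a firing rate in Hz.
counts = np.sum([np.histogram(tr, bins=edges)[0] for tr in trials], axis=0)
psth = counts / (len(trials) * bin_width)

# SDF: smooth the binned rate estimate with a Gaussian kernel (sigma in bins).
sdf = gaussian_filter1d(psth, sigma=2.0)
```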

    Exact Bayesian bin classification: a fast alternative to Bayesian classification and its application to neural response analysis

    We investigate the general problem of signal classification and, in particular, that of assigning stimulus labels to neural spike trains recorded from single cortical neurons. Finding efficient ways of classifying neural responses is especially important in experiments involving rapid presentation of stimuli. We introduce a fast, exact alternative to Bayesian classification. Instead of estimating the class-conditional densities p(x|y) (where x is a scalar function of the feature[s], y the class label) and converting them to P(y|x) via Bayes' theorem, this probability is evaluated directly and without the need for approximations. This is achieved by integrating over all possible binnings of x with an upper limit on the number of bins. Computational time is quadratic in both the number of observed data points and the number of bins. The algorithm also allows for the computation of feedback signals, which can be used as input to subsequent stages of inference, e.g. neural network training. Responses of single neurons from high-level visual cortex (area STSa) to rapid sequences of complex visual stimuli are analysed. Information latency and response duration increase nonlinearly with presentation duration, suggesting that neural processing speeds adapt to presentation speeds.
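    A simplified sketch of the core idea of estimating P(y|x) directly from bin counts rather than via class-conditional densities, assuming one fixed binning of the scalar feature x; the paper instead integrates over all possible binnings up to a maximum number of bins. Function and variable names are illustrative.

```python
import numpy as np

def posterior_from_bins(x, y, n_classes, n_bins=10):
    # Quantile edges give roughly equal-occupancy bins of the scalar feature x.
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.digitize(x, edges[1:-1])            # bin index in 0..n_bins-1
    counts = np.ones((n_bins, n_classes))         # add-one prior per (bin, class)
    for b, label in zip(bins, y):
        counts[b, label] += 1
    return counts / counts.sum(axis=1, keepdims=True)   # rows are P(y | bin of x)

# Usage with synthetic data: one scalar response feature per trial, two labels.
rng = np.random.default_rng(2)
x_train = rng.normal(size=500)
y_train = (x_train + rng.normal(0.0, 1.0, 500) > 0).astype(int)
posterior = posterior_from_bins(x_train, y_train, n_classes=2)
```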

    Unsupervised learning of coordinate transformations using temporal coherence

    The visual system converts location information from retinal to several other more relevant coordinate systems. This mechanism has been modelled previously by neural networks trained with supervised and reinforcement learning. Here, we use a simple and biologically plausible trace-Hebbian unsupervised learning mechanism based on the principle of temporal coherence (Földiák, 1991, Neural Computation 3: 194-200). This algorithm had been used previously to learn invariance to positional shifts. Here we show that this simple unsupervised mechanism, presented with a static environment scanned by smooth and saccadic eye movements, can also learn eye-position invariance and coordinate transformations. As an example, a representation defined in retinal coordinates is transformed into one in head-centred coordinates. The neural tuning found in area 7a shows tuning modulated both by retinal position and eye position, which is consistent with the dual tuning properties of the input representation of our model. The network uses the constraint of temporal continuity, without explicit teaching or reinforcement, to learn the appropriate pooling connections to achieve a tuning effected only by head-centred coordinates, independently of retinal and eye positions. The network is demonstrated on 1-D and 2-D input image sequences. The mechanism is sufficiently general to apply to other coordinate transformations as well.
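    A minimal sketch of a trace-Hebbian update in the spirit of the temporal-coherence principle cited above (Földiák, 1991): each output unit keeps a low-pass-filtered trace of its recent activity, and weights are strengthened between currently active inputs and units with a high trace. The variable names, learning rate, and trace decay are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def trace_hebbian_step(w, x, trace, delta=0.2, eta=0.01):
    y = w @ x                                       # feedforward activation
    trace = (1.0 - delta) * trace + delta * y       # temporal trace of activity
    w += eta * np.outer(trace, x)                   # Hebbian update using the trace
    w /= np.linalg.norm(w, axis=1, keepdims=True)   # keep weight rows bounded
    return w, trace

# Presenting temporally coherent input sequences (e.g. a scene scanned by
# smooth eye movements) lets units pool over inputs that occur close in time.
rng = np.random.default_rng(4)
w = rng.random((5, 20))
trace = np.zeros(5)
for x in rng.random((100, 20)):      # stand-in for an input image sequence
    w, trace = trace_hebbian_step(w, x, trace)
```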

    Models of sensory coding

    SIGLE. Available from the British Library Document Supply Centre - DSC:9106.17 (CUED/F-INFENG/TR--91) / BLDSC - British Library Document Supply Centre, GB, United Kingdom